8 research outputs found

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for statistical background modeling. In developing our framework we also focus on two other topics: motion trajectory estimation for global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework serves dynamic scene understanding and the recognition of individuals and threats from image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, employing GMMs that are optimal with respect to an information complexity criterion. Moving objects are segmented out through background subtraction against the computed background model. This technique produces results superior to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge these studies do not handle is accumulating the corresponding information from highly displaced moving objects. To address this, we develop a framework for SR Image Reconstruction of moving objects with such high levels of displacement. Our assumption is that the LR images differ from each other due to local motion of the objects and global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps.
    These steps are: suppression of the global motion; motion segmentation accompanied by background subtraction to extract the moving objects; suppression of the local motion of the segmented-out regions; and super-resolving the information accumulated from the moving objects rather than the whole scene. The result is a reliable offline SR Image Reconstruction tool that handles several types of dynamic scene change, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proved superior to state-of-the-art algorithms, which make no significant effort toward dynamic scene representation for non-stationary camera systems.
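    The motion segmentation step relies on background subtraction against a per-pixel GMM background model. A minimal single-channel sketch of one online update is given below; the learning rate, match threshold, and weight threshold are illustrative choices, not the dissertation's information-complexity-guided model selection:

```python
import numpy as np

def update_gmm(frame, means, variances, weights, lr=0.05, match_thresh=2.5):
    """One online update of a per-pixel Gaussian mixture background model
    (Stauffer-Grimson style sketch). State arrays have shape (H, W, K).
    Returns a boolean foreground mask of shape (H, W)."""
    diff = frame[..., None] - means                    # (H, W, K)
    dist = np.abs(diff) / np.sqrt(variances)           # 1-D Mahalanobis distance
    matched = dist < match_thresh                      # components explaining the pixel
    rho = lr * matched                                 # update only matched components
    means += rho * diff
    variances += rho * (diff ** 2 - variances)
    weights += lr * (matched - weights)                # matched gain weight, rest decay
    weights /= weights.sum(axis=-1, keepdims=True)
    # A pixel is background if a sufficiently weighted component matches it.
    background = (matched & (weights > 0.4)).any(axis=-1)
    return ~background

# Toy usage: static background of intensity 100, one bright 2x2 "moving object".
H, W, K = 8, 8, 2
means = np.full((H, W, K), 100.0)
variances = np.full((H, W, K), 20.0)
weights = np.full((H, W, K), 1.0 / K)
frame = np.full((H, W), 100.0)
frame[2:4, 2:4] = 200.0
mask = update_gmm(frame, means, variances, weights)
print(mask.sum())  # 4 foreground pixels
```

In a full system this update would run per frame, with component creation and pruning handled by the information complexity criterion described in the abstract.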
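    The final super-resolving step, for the simplified case where the global and local motion-compensation stages have already yielded object observations with known sub-pixel shifts, can be sketched as a shift-and-add fusion (an illustrative stand-in, not the dissertation's reconstruction method):

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale=2):
    """Fuse aligned low-resolution observations into one high-resolution grid.
    `shifts` are per-frame offsets in HR-pixel units, assumed known from the
    preceding motion-compensation steps."""
    h, w = lr_frames[0].shape
    hr = np.zeros((h * scale, w * scale))
    count = np.zeros_like(hr)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        hr[dy::scale, dx::scale] += frame              # scatter LR samples onto HR grid
        count[dy::scale, dx::scale] += 1
    count[count == 0] = 1                              # avoid division by zero in gaps
    return hr / count

# Toy usage: four quarter-pixel-shifted LR views exactly tile a 2x HR grid.
hr_truth = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lr_frames = [hr_truth[dy::2, dx::2] for dy, dx in shifts]
hr_est = shift_and_add_sr(lr_frames, shifts, scale=2)
print(np.allclose(hr_est, hr_truth))  # True
```

Restricting the fusion to the segmented object regions, rather than the whole scene, is what the abstract refers to as super-resolving the accumulated information from moving objects.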

    Clustered Exact Daum-Huang Particle Flow Filter

    Unlike conventional particle filters, particle flow filters do not rely on a proposal density and importance sampling; they migrate particles through a flow derived from the log-homotopy scheme, ensuring successful migration of the particles. Among the efficient implementations, the Exact Daum-Huang (EDH) filter computes the migration parameters once for all particles together, while its improved version, the Localized Exact Daum-Huang (LEDH) filter, calculates the migration parameters separately for each particle. The main objective of this study is to reduce the computational cost of the LEDH filter, which stems from the exhaustive calculation of each particle's migration parameters. We propose the Clustered Exact Daum-Huang (CEDH) filter. Its main idea is to cluster particles that produce similar errors and then compute the same migration parameters for all particles within each cluster. Through clustering and handling of high-error particles, their engagement and influence can be balanced, greatly reducing their negative effect on the overall system. We implement the filter for a high-dimensional target tracking scenario and compare the results with those of the EDH and LEDH filters to validate its efficiency.
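    The clustering idea — sharing one set of flow parameters (A, b) per error-based cluster instead of one per particle as in LEDH — can be sketched for a linear-Gaussian measurement model as follows. The quantile-based clustering and Euler integration here are simple stand-ins, not necessarily the paper's exact choices:

```python
import numpy as np

def edh_flow_params(x_lin, P, H, R, z, lam):
    """Exact Daum-Huang flow parameters A, b at pseudo-time lam for a
    (locally) linear measurement model z = H x + v, v ~ N(0, R)."""
    S = lam * H @ P @ H.T + R
    A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
    I = np.eye(len(x_lin))
    b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ x_lin)
    return A, b

def cedh_step(particles, P, H, R, z, lam, dlam, n_clusters=3):
    """One Euler step of dx/dlam = A x + b, with one (A, b) pair per cluster
    of similar-error particles (a sketch of the CEDH idea)."""
    errors = np.linalg.norm(particles @ H.T - z, axis=1)
    # Cheap clustering: quantile bins on the measurement error.
    edges = np.quantile(errors, np.linspace(0, 1, n_clusters + 1)[1:-1])
    labels = np.digitize(errors, edges)
    out = particles.copy()
    for c in range(n_clusters):
        idx = labels == c
        if not idx.any():
            continue
        A, b = edh_flow_params(particles[idx].mean(axis=0), P, H, R, z, lam)
        out[idx] += dlam * (particles[idx] @ A.T + b)
    return out

# Toy usage: flow prior samples x ~ N(0, I) toward the measurement z.
rng = np.random.default_rng(0)
particles = rng.normal(size=(200, 2))
prior_mean = particles.mean(axis=0)
P, H = np.eye(2), np.eye(2)
R = 0.1 * np.eye(2)
z = np.array([1.0, 1.0])
lams = np.linspace(0.0, 1.0, 21)
for lam, nxt in zip(lams[:-1], lams[1:]):
    particles = cedh_step(particles, P, H, R, z, lam, nxt - lam)
post_mean = particles.mean(axis=0)  # has flowed toward the Kalman posterior mean
```

The saving over LEDH comes from calling `edh_flow_params` once per cluster rather than once per particle; with N particles and C clusters the per-step cost of parameter computation drops by roughly N/C.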

    A nested autoencoder approach to automated defect inspection on textured surfaces

    In recent years, industrial production has come under intense competitive pressure. To stay ahead of the competition, emerging technologies must be developed and incorporated. Automated visual inspection systems, which improve the overall quantity and quality of mass production lines, are crucial, yet modifying an inspection system involves excessive time and money costs. These systems should therefore be flexible enough to fulfill the changing requirements of high-capacity production. A coherent defect detection model, as a primary application for a real-time intelligent visual surface inspection system, is proposed in this paper. The method takes a new approach, using nested autoencoders trained with defect-free and defect-injected samples to detect defects. The first autoencoder is used essentially for feature extraction and for reconstructing the image from these features; the second is employed to identify and fix defects in the feature code. Defects are detected by thresholding the difference between the decoded outputs of the first and the second autoencoder. The proposed model achieves a 96% detection rate and relatively good segmentation performance while inspecting fabrics driven at high speeds.
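    The detection rule — thresholding the difference between the two decoded outputs — can be sketched with simple stand-in functions in place of the trained networks; the function names, toy texture, and threshold below are illustrative assumptions:

```python
import numpy as np

def detect_defects(image, reconstruct, repair, thresh=0.2):
    """Nested-autoencoder detection rule (sketch): `reconstruct` stands in for
    the first autoencoder (feature extraction + reconstruction) and `repair`
    for the second (which fixes defects in the feature code). Defects are the
    pixels where the two decoded outputs disagree by more than `thresh`."""
    diff = np.abs(reconstruct(image) - repair(image))
    return diff > thresh

# Toy usage: a periodic texture with one injected bright defect pixel.
texture = np.tile([[0.2, 0.8], [0.8, 0.2]], (4, 4))
defective = texture.copy()
defective[3, 3] = 1.0                       # injected defect
reconstruct = lambda img: img               # stand-in: faithful reconstruction
repair = lambda img: texture                # stand-in: decodes the defect-free texture
mask = detect_defects(defective, reconstruct, repair)
print(mask.sum())  # 1
```

Because the repairing autoencoder reproduces only the defect-free texture statistics, the difference map is near zero everywhere except at defects, which is what makes a simple threshold sufficient for segmentation.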